
    Performance Analysis of Sparse Recovery Based on Constrained Minimal Singular Values

    The stability of sparse signal reconstruction is investigated in this paper. We design efficient algorithms to verify the sufficient condition for unique $\ell_1$ sparse recovery. One of our algorithms produces results comparable to the state-of-the-art technique and runs orders of magnitude faster. We show that the $\ell_1$-constrained minimal singular value ($\ell_1$-CMSV) of the measurement matrix determines, in a very concise manner, the recovery performance of $\ell_1$-based algorithms such as Basis Pursuit, the Dantzig selector, and the LASSO estimator. Compared with performance analyses involving the Restricted Isometry Constant, the arguments in this paper are much less complicated and provide more intuition on the stability of sparse signal recovery. We also show that, with high probability, the subgaussian ensemble generates measurement matrices with $\ell_1$-CMSVs bounded away from zero, as long as the number of measurements is relatively large. To compute the $\ell_1$-CMSV and its lower bound, we design two algorithms based on the interior point algorithm and on semidefinite relaxation.
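    Assuming the standard definition $\rho_s(A) = \min\{\|Ax\|_2 : \|x\|_2 = 1,\ \|x\|_1^2 \le s\,\|x\|_2^2\}$ (the abstract itself does not restate it), a crude numerical sanity check is possible without the paper's interior-point or semidefinite-relaxation machinery: every $s$-sparse unit vector is feasible by Cauchy-Schwarz, so minimizing $\|Ax\|_2$ over random $s$-sparse unit vectors upper-bounds the $\ell_1$-CMSV. The Python sketch below is an illustration, not the paper's algorithm; the function name and trial count are arbitrary choices.

```python
import numpy as np

def cmsv_upper_bound(A, s, trials=20000, seed=0):
    """Monte Carlo upper bound on the l1-CMSV rho_s(A).

    Every s-sparse unit vector x satisfies ||x||_1^2 <= s * ||x||_2^2
    (Cauchy-Schwarz), hence is feasible for the CMSV minimization, so
    min ||Ax||_2 over such vectors upper-bounds rho_s(A).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    best = np.inf
    for _ in range(trials):
        x = np.zeros(n)
        support = rng.choice(n, size=s, replace=False)
        x[support] = rng.standard_normal(s)
        x /= np.linalg.norm(x)
        best = min(best, np.linalg.norm(A @ x))
    return best

# Subgaussian (here Gaussian) ensemble, normalized by sqrt(m); with m large
# relative to s, the bound should stay away from zero, as the paper predicts.
m, n, s = 64, 256, 4
A = np.random.default_rng(1).standard_normal((m, n)) / np.sqrt(m)
print(cmsv_upper_bound(A, s))
```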

    Computable Performance Analysis of Recovering Signals with Low-dimensional Structures

    The last decade witnessed burgeoning development in the reconstruction of signals by exploiting their low-dimensional structures: in particular, the sparsity, block-sparsity, low-rankness, and low-dimensional manifold structure of general nonlinear data sets. The reconstruction performance for these signals relies heavily on the structure of the sensing matrix/operator. In many applications there is flexibility to select the optimal sensing matrix from a class of candidates. A prerequisite for optimal sensing matrix design is the computability of the performance of different recovery algorithms. I present a computational framework for analyzing the recovery performance of signals with low-dimensional structures. I define a family of goodness measures for arbitrary sensing matrices as the optimal values of a set of optimization problems. As one of the primary contributions of this work, I associate the goodness measures with the fixed points of functions defined by a series of linear programs, second-order cone programs, or semidefinite programs, depending on the specific problem. This connection with fixed-point theory, together with a bisection search implementation, yields efficient algorithms that compute the goodness measures with global convergence guarantees. As a by-product, I implement efficient algorithms to verify sufficient conditions for exact signal recovery in the noise-free case. The implementations perform orders of magnitude faster than the state-of-the-art techniques. The utility of these goodness measures lies in their relation to reconstruction performance: I derive bounds on the recovery errors of convex relaxation algorithms in terms of these goodness measures. Using tools from empirical processes and generic chaining, I show analytically that, as long as the number of measurements is relatively large, these goodness measures are bounded away from zero for a large class of random sensing matrices, a result parallel to the probabilistic analysis of the restricted isometry property. Numerical experiments show that, compared with bounds based on the restricted isometry property, our error bounds apply to a wider range of problems and are tighter when the sparsity levels of the signals are relatively low. I expect that computable performance bounds will open the door to wide applications in compressive sensing, sensor arrays, radar, MRI, image processing, computer vision, collaborative filtering, control, and many other areas where low-dimensional signal structures arise naturally.
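    The fixed-point characterization above pairs naturally with bisection: if the function $f$ defined by the inner convex programs is monotone, then $g(\eta) = f(\eta) - \eta$ changes sign exactly once, so the fixed point can be bracketed and halved to any tolerance, which is the source of the global convergence guarantee. The Python sketch below shows only this outer loop; the toy stand-in for $f$ replaces the linear, second-order cone, or semidefinite subproblems, whose details the abstract does not give.

```python
import numpy as np

def fixed_point_bisection(f, lo, hi, tol=1e-8, max_iter=200):
    """Locate the fixed point of f on [lo, hi] by bisection on g(x) = f(x) - x.

    Global convergence requires g to change sign exactly once on [lo, hi],
    e.g. when f is non-increasing, so the fixed point is unique.
    """
    def g(x):
        return f(x) - x
    assert g(lo) * g(hi) <= 0, "fixed point not bracketed on [lo, hi]"
    while hi - lo > tol and max_iter > 0:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:   # sign change in the left half
            hi = mid
        else:                     # otherwise it lies in the right half
            lo = mid
        max_iter -= 1
    return 0.5 * (lo + hi)

# Toy stand-in: in the thesis, f(eta) would be the optimal value of a linear,
# second-order cone, or semidefinite program parameterized by eta.
print(fixed_point_bisection(np.cos, 0.0, 1.0))  # ~0.739085, fixed point of cos
```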

    Atomic norm denoising with applications to line spectral estimation

    Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectral estimation that provides theoretical guarantees for the mean-squared-error (MSE) performance in the presence of noise and without knowledge of the model order. We propose an abstract theory of denoising with atomic norms and specialize this theory to obtain a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials. We show that the associated convex optimization problem can be solved in polynomial time via semidefinite programming (SDP). We also show that the SDP can be approximated by an $\ell_1$-regularized least-squares problem that achieves nearly the same error rate as the SDP but scales to much larger problems. We compare both the SDP and $\ell_1$-based approaches with classical line spectral analysis methods and demonstrate that the SDP outperforms the $\ell_1$ optimization, which in turn outperforms the MUSIC, Cadzow's, and Matrix Pencil approaches in terms of MSE over a wide range of signal-to-noise ratios.
    Comment: 27 pages, 10 figures. A preliminary version of this work appeared in the Proceedings of the 49th Annual Allerton Conference in September 2011. Numerous numerical experiments were added to this version in accordance with suggestions by an anonymous reviewer.
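    The $\ell_1$-regularized least-squares surrogate mentioned in the abstract can be sketched by discretizing the frequency line into a grid of complex sinusoids and solving a lasso problem over that dictionary. The Python code below is an illustration under that reading, not the authors' implementation; the grid size, regularization level tau, and the plain ISTA solver are all assumptions.

```python
import numpy as np

def l1_line_spectral_denoise(y, tau, grid_size=2048, iters=500):
    """Gridded l1-regularized least-squares surrogate for atomic norm denoising.

    Builds a dictionary F of unit-norm complex sinusoids on a uniform
    frequency grid and solves min_c 0.5*||y - F c||^2 + tau*||c||_1 by ISTA.
    """
    n = len(y)
    freqs = np.arange(grid_size) / grid_size
    F = np.exp(2j * np.pi * np.outer(np.arange(n), freqs)) / np.sqrt(n)
    L = np.linalg.norm(F, 2) ** 2               # Lipschitz constant of the gradient
    c = np.zeros(grid_size, dtype=complex)
    for _ in range(iters):
        z = c - (F.conj().T @ (F @ c - y)) / L  # gradient step
        mag = np.abs(z)
        # complex soft-thresholding: shrink magnitudes by tau/L, keep phases
        c = np.where(mag > tau / L, (1.0 - tau / (L * mag.clip(min=1e-12))) * z, 0.0)
    return freqs, c, F @ c                      # grid, coefficients, denoised signal

# Usage sketch: two complex sinusoids in noise (amplitudes, frequencies illustrative)
rng = np.random.default_rng(0)
n, t = 64, np.arange(64)
y = (np.exp(2j * np.pi * 0.10 * t)
     + 0.5 * np.exp(2j * np.pi * 0.32 * t)
     + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
freqs, c, y_hat = l1_line_spectral_denoise(y, tau=1.0)
print(freqs[np.abs(c) > 0.5])  # grid frequencies clustering near 0.10 and 0.32
```

    Off-grid frequencies leak into neighboring bins (basis mismatch), which is precisely the limitation that motivates the gridless SDP formulation in the paper.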